Noise Model for Two-Photon Microscopy Data

This model is derived from the physical processes involved in acquiring data with a photomultiplier tube (PMT); only the results are reproduced here. See [2] for the full derivation.


Let $x \in X \subset \mathbb{Z}^2$ denote spatial coordinates and let $z : X \rightarrow \mathbb{R}^+$ be an image obtained with a PMT. For each $x \in X$, we model $z(x)$ as a Gaussian random variable:

$$z(x) = y(x) + \sigma( y(x) )\xi(x)$$

where $y(x) = E\{z(x)\}$ is the noise-free signal and $\xi(x)$ is zero-mean Gaussian noise with unit variance, $\xi(x) \sim \mathcal{N}(0,1)$. Because the signal originates from photon counting, the variance of $z(x)$ scales linearly with the signal:

$$\sigma^2(y(x))=\text{var}\{z(x)\}=ay(x)$$

Extending this model from a linear to an affine dependency lets us also account for variance contributed by signal-independent noise sources (e.g. dark-current noise). We therefore add a constant term $b$:

$$\sigma^2(y(x))=ay(x)+b$$

The same heteroscedastic approximation is typically used to model image data acquired with digital cameras, which means that techniques developed for denoising raw camera images can also be applied to images obtained with PMTs.
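As a quick illustration, here is a minimal sketch of sampling from the affine-variance model above. The parameter values a and b are arbitrary placeholders, not values estimated from our data.

import numpy as np

# Hypothetical noise parameters; in practice they are estimated from the data (see [7]).
a, b = 0.05, 0.01

def simulate_pmt_noise(y, a, b, seed=0):
    """Sample z(x) = y(x) + sigma(y(x)) * xi(x), with xi ~ N(0, 1)
    and sigma^2(y) = a*y + b (the heteroscedastic model above)."""
    rng = np.random.RandomState(seed)
    xi = rng.standard_normal(np.shape(y))
    return y + np.sqrt(a * y + b) * xi

y = np.linspace(0.0, 1.0, 100000)   # noise-free intensities
z = simulate_pmt_noise(y, a, b)     # noisy observations; var{z} grows linearly with y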


Full Spatial Denoising Procedure (prototype)

  • Determine the noise parameters in the model using [7] (implementation: http://www.cs.tut.fi/~foi/sensornoise.html)
    • The noise variance is also a required parameter of the Kalman filter, which can be used to improve temporal resolution after filtering with this method.
  • Use a Variance Stabilizing Transform (VST) to transform the heteroscedastic data into homoscedastic data [6]
    • The data obtained via the two-photon microscopy technique is clipped. (See "Clipping" below)
  • Apply the BM3D filter in [1] (implementation: http://www.cs.tut.fi/~foi/GCF-BM3D/)
    • BM3D is designed for data corrupted by independent, identically distributed (i.i.d.) Gaussian noise. The noise in the raw microscopy data is not identically distributed, because its variance depends on the signal and is further distorted by clipping, hence the previous step.
  • Apply the exact unbiased inverse VST to yield the denoised data [6] (see the sketch after this list)
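Below is a minimal sketch of this stabilize–denoise–invert loop under the affine variance model, assuming the generalized Anscombe transform as the VST. The inverse shown is the simple algebraic one rather than the exact unbiased inverse of [6], and `estimate_noise_params` and `bm3d` are placeholders for the implementations linked above, not functions defined in this notebook.

import numpy as np

def gat_forward(z, a, b):
    """Generalized Anscombe transform: maps data with var = a*y + b to
    approximately unit-variance Gaussian noise."""
    return (2.0 / a) * np.sqrt(np.maximum(a * z + (3.0 / 8.0) * a ** 2 + b, 0.0))

def gat_inverse_algebraic(D, a, b):
    """Simple algebraic inverse of the GAT (biased at low counts; the exact
    unbiased inverse of [6] is what the actual pipeline should use)."""
    return (a / 4.0) * D ** 2 - (3.0 / 8.0) * a - b / a

# Hypothetical usage on a single frame:
# a, b = estimate_noise_params(frame)             # step 1, via [7]
# D = gat_forward(frame, a, b)                    # step 2, variance stabilization
# D_hat = bm3d(D, sigma=1.0)                      # step 3, noise is ~N(0, 1) after the VST
# frame_hat = gat_inverse_algebraic(D_hat, a, b)  # step 4, back to the original intensity scale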

Clipping

With the intention of making full use of the rather limited dynamic range of digital sensors, pictures are usually taken with some areas purposely overexposed or clipped, i.e. accumulating charge beyond the full-well capacity of the individual pixels. These pixels present highly nonlinear noise characteristics, completely different from those of normally exposed pixels. In other words, clipping occurs when the range of the acquisition system is limited, so that signal values above or below the sensor's recording limits are "clipped", i.e. masked to the upper or lower bound of the acquisition system.
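As a toy illustration of lower-bound clipping (the acquisition range [0, 1] and the noise parameters below are arbitrary assumptions, not properties of our sensor): for a dim intensity near the sensor floor, part of the noise distribution falls below zero and is masked to the bound, which biases both the mean and the variance away from the model above.

import numpy as np

rng = np.random.RandomState(0)
y_dim = 0.02                                    # true intensity close to the lower bound
z = y_dim + np.sqrt(0.05 * y_dim + 0.01) * rng.standard_normal(100000)
z_clipped = np.clip(z, 0.0, 1.0)                # acquisition range limited to [0, 1]

# Clipping pushes the sample mean up and shrinks the sample variance,
# so var{z} = a*y + b no longer describes the recorded values near the bound.
print z.mean(), z_clipped.mean()
print z.var(), z_clipped.var()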

In our case, two-photon microscopy suffers from clipping: there may be overexposure, but there is definitely underexposure, resulting in lower-bound clipping. This means we do not see the calcium dynamics in the finer processes, which is a frustrating limitation for experimentalists*.

*My summary of what James has told me

CODE


In [1]:
%matplotlib inline
from load_environment import *            # provides `np` and loads the raw two-photon data into `data`
data = np.array(data, dtype=np.float16)   # cast the frame stack to float16


-- Loading Data...
	- Numpy file already exists. Loading /home/ndc08/code/research/compneuro/max_planck_jupiter/nathans_project/data/ferret2152_TSeries-01292015-1540_site3_0.75ISOF_AL.npy...

Tile the first 32 images to improve the noise parameter estimation


In [3]:
h, w = data[0].shape
print data[0].shape; print data.dtype
# Tile the first 32 frames into a 4 x 8 mosaic
tiledData = np.zeros((4*h, 8*w), dtype=np.float32)  # float32 so PIL can build an 'F'-mode image
for frameNum in range(32):
    frame = data[frameNum]
    row = frameNum // 8
    col = frameNum % 8
    tiledData[row*h:(row+1)*h, col*w:(col+1)*w] = frame
from PIL import Image
im = Image.fromarray(tiledData)
im.show()
#im.save("tiledFerretData.tiff")


(262, 256)
float16
